I had the privilege of creating and then leading the Future of Humanity Institute at Oxford University. It started with just three researchers back in 2005, and grew to about fifty at its peak, before its closure in 2024. The FHI had a unique intellectual culture and was a magnet for some of the most exceptionally brilliant (and often eccentric) minds in the world. Despite its small scale and relatively short lifespan, the FHI produced an astonishing string of fundamental advances, which amount to a new way of thinking about big-picture questions for humanity. I imagine that perhaps Plato’s Academy in ancient Athens, or some artistic circle in the heyday of Renaissance Florence, might have enjoyed a similar spirit of aliveness and intellectual fertility. In addition to its research accomplishments, the FHI community spawned a number of academic offshoots, nonprofits, and other initiatives that continue to thrive. It helped incubate the AI safety research field, the existential risk and rationalist communities, and the effective altruism movement. Its alumni have gone on to hold prominent positions in other institutions.
The FHI's shutdown in 2024 came after several years of increasing bureaucratic strangulation by the local faculty administration. Today there is a much broader base of support for the kind of work the institute was set up to enable, so in that sense it has served its purpose. But those of us fortunate enough to have been there during its early or peak years will cherish the memory of the academic anomaly that was the FHI.
As for my own research, this webpage itself is perhaps the best summary. Aside from my work related to artificial intelligence (on safety, ethics, governance, and strategic implications), I have also originated or contributed to the development of ideas such as the simulation argument, existential risk, transhumanism, information hazards, astronomical waste, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, anthropic shadow, differential technological development, the unilateralist’s curse, the parliamentary model of decision-making under normative uncertainty, the notion of a singleton, the vulnerable world hypothesis, and the cosmic host, alongside analyses of future technological capabilities and concomitant ethical issues, risks, and opportunities. Somewhat more recently, I’ve been doing work on the moral and political status of digital minds, on some issues in metaethics, and on life and value in a post-instrumental condition (see Deep Utopia: Life and Meaning in a Solved World). These days I’m mostly thinking about things related to AGI and superintelligence.
I’ve noticed that misconceptions about my work sometimes arise when people have only read portions of it. Examples include the belief that I’m a gung-ho transhumanist, or that I’m anti-AI, or that I’m a consequentialist fanatic who would favor any policy purported to mitigate some existential risk. I suspect one cause of such errors is that many of my papers investigate particular aspects of some complex issue, or trace out the implications that would follow from some particular set of assumptions. This is an analytic strategy: carve out parts of a problem where one can see how to make intellectual progress, make that progress, and then return to see if one can find ways to make additional parts tractable. My actual overall views on the challenges confronting us are far more nuanced, complex, and tentative. This has always been the case, and it has become even more so as I’ve mellowed with age. I’ve never been temperamentally inclined towards strong ideological positions or indeed isms of any kind.